Measuring academic influence: Not all citations are equal
The importance of a research article is routinely measured by counting how
many times it has been cited. However, treating all citations with equal weight
ignores the wide variety of functions that citations perform. We want to
automatically identify the subset of references in a bibliography that have a
central academic influence on the citing paper. For this purpose, we examine
the effectiveness of a variety of features for determining the academic
influence of a citation. By asking authors to identify the key references in
their own work, we created a data set in which citations were labeled according
to their academic influence. Using automatic feature selection with supervised
machine learning, we found a model for predicting academic influence that
achieves good performance on this data set using only four features. The best
features, among those we evaluated, were those based on the number of times a
reference is mentioned in the body of a citing paper. The performance of these
features inspired us to design an influence-primed h-index (the hip-index).
Unlike the conventional h-index, it weights citations by how many times a
reference is mentioned. According to our experiments, the hip-index is a better
indicator of researcher performance than the conventional h-index.
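The abstract above defines the hip-index only informally (citations weighted by how many times a reference is mentioned in the citing paper's body). The sketch below is one plausible reading, not the paper's exact formula: it computes the standard h-index, then a variant in which each incoming citation contributes its in-text mention count instead of 1. The input format and the weighting scheme are assumptions for illustration.

```python
# Hypothetical sketch: h-index vs. a mention-weighted ("hip-index"-style)
# variant. The exact weighting used in the paper may differ; here we
# assume each citation is weighted by the number of times the reference
# is mentioned in the body of the citing paper.

def h_index(citation_counts):
    """Largest h such that h papers each have >= h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def hip_index(papers):
    """papers[i] lists, for each citation received by paper i, the
    number of in-text mentions in the citing paper. A citation
    mentioned m times contributes weight m (assumed weighting)."""
    weighted_counts = [sum(mentions) for mentions in papers]
    return h_index(weighted_counts)

# Three papers: the first is cited twice (5 and 3 mentions), the
# second once (4 mentions), the third once (1 mention).
papers = [[5, 3], [4], [1]]
print(h_index([len(p) for p in papers]))  # plain h-index: 1
print(hip_index(papers))                  # mention-weighted: 2
```

On this toy input the two indices diverge: by raw citation counts only one paper reaches its rank, while mention-weighting lifts two papers above the threshold, illustrating how heavily-mentioned references can raise the index.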
NatLogAttack: A Framework for Attacking Natural Language Inference Models with Natural Logic
Reasoning has been a central topic in artificial intelligence from the
beginning. The recent progress made on distributed representation and neural
networks continues to improve the state-of-the-art performance of natural
language inference. However, it remains an open question whether the models
perform real reasoning to reach their conclusions or rely on spurious
correlations. Adversarial attacks have proven to be an important tool to help
evaluate the Achilles' heel of the victim models. In this study, we explore the
fundamental problem of developing attack models based on logic formalism. We
propose NatLogAttack to perform systematic attacks centring around natural
logic, a classical logic formalism that traces back to Aristotle's syllogisms
and has been developed closely alongside natural language inference. The
proposed framework renders both label-preserving and label-flipping attacks. We
show that compared to the existing attack models, NatLogAttack generates better
adversarial examples with fewer visits to the victim models. The victim models
are found to be more vulnerable under the label-flipping setting. NatLogAttack
provides a tool to probe the existing and future NLI models' capacity from a
key viewpoint and we hope more logic-based attacks will be further explored for
understanding the desired properties of reasoning.
Comment: Published as a conference paper at ACL 2023
Neural Natural Language Inference Models Enhanced with External Knowledge
Modeling natural language inference is a very challenging task. With the
availability of large annotated data, it has recently become feasible to train
complex models such as neural-network-based inference models, which have been
shown to achieve state-of-the-art performance. Although relatively large
annotated datasets exist, can machines learn all the knowledge needed to
perform natural language inference (NLI) from these data? If not, how can
neural-network-based NLI models benefit from external knowledge, and how should
such models be built to leverage it? In this paper, we enrich the state-of-the-art
neural natural language inference models with external knowledge. We
demonstrate that the proposed models improve neural NLI models to achieve the
state-of-the-art performance on the SNLI and MultiNLI datasets.
Comment: Accepted by ACL 2018